Results 1 - 20 of 118
1.
Journal of Biomedical Engineering ; (6): 492-498, 2023.
Article in Chinese | WPRIM | ID: wpr-981567

ABSTRACT

Non-rigid registration plays an important role in medical image analysis. U-Net is a research hotspot in medical image analysis and is widely used in medical image registration. However, existing registration models based on U-Net and its variants lack sufficient learning ability when dealing with complex deformations and do not fully utilize multi-scale contextual information, resulting in insufficient registration accuracy. To address this issue, a non-rigid registration algorithm for X-ray images based on deformable convolution and a multi-scale feature focusing module was proposed. First, residual deformable convolution was used to replace the standard convolution of the original U-Net, enhancing the registration network's ability to represent geometric deformations. Then, strided convolution was used to replace the pooling operations in the downsampling path, alleviating the feature loss caused by repeated pooling. In addition, a multi-scale feature focusing module was introduced into the bridging layer of the encoder-decoder structure to improve the network's ability to integrate global contextual information. Theoretical analysis and experimental results both showed that the proposed algorithm can focus on multi-scale contextual information, handle medical images with complex deformations, and improve registration accuracy, making it suitable for non-rigid registration of chest X-ray images.
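
For illustration only, the following is a minimal PyTorch sketch of a residual deformable convolution block and of strided-convolution downsampling, assuming torchvision's DeformConv2d; it is not the authors' implementation and the layer sizes are arbitrary:

    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class ResidualDeformBlock(nn.Module):
        """Residual block whose main path uses deformable convolution (illustrative sketch)."""
        def __init__(self, channels, kernel_size=3):
            super().__init__()
            pad = kernel_size // 2
            # Offsets: 2 values (dx, dy) per kernel location per output position.
            self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                    kernel_size, padding=pad)
            self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)
            self.bn = nn.BatchNorm2d(channels)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            offset = self.offset(x)
            out = self.act(self.bn(self.deform(x, offset)))
            return out + x  # residual connection

    # Strided convolution in place of pooling for downsampling.
    downsample = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)
    x = torch.randn(1, 64, 128, 128)
    y = downsample(ResidualDeformBlock(64)(x))   # -> (1, 128, 64, 64)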


Subject(s)
Algorithms , Learning , Thorax
2.
Journal of Biomedical Engineering ; (6): 482-491, 2023.
Article in Chinese | WPRIM | ID: wpr-981566

ABSTRACT

Recently, deep learning has achieved impressive results in medical image tasks. However, it usually requires large-scale annotated data, and medical images are expensive to annotate, so learning efficiently from limited annotated data remains a challenge. The two commonly used approaches are transfer learning and self-supervised learning, but both have been little studied on multimodal medical images, so this study proposes a contrastive learning method for multimodal medical images. The method treats images of different modalities from the same patient as positive samples, which effectively increases the number of positive samples during training and helps the model fully learn the similarities and differences of lesions across modalities, thereby improving the model's understanding of medical images and its diagnostic accuracy. Because commonly used data augmentation methods are not suitable for multimodal images, the paper also proposes a domain adaptive denormalization method that transforms source-domain images with the help of statistical information from the target domain. The method is validated on two multimodal medical image classification tasks: in the microvascular infiltration recognition task it achieves an accuracy of (74.79 ± 0.74)% and an F1 score of (78.37 ± 1.94)%, an improvement over conventional learning methods, and it also achieves significant improvements in the brain tumor pathology grading task. The results show that the method performs well on multimodal medical images and can serve as a reference solution for pre-training on multimodal medical images.
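
A minimal sketch of the core idea, treating two modalities of the same patient as the positive pair in an InfoNCE-style contrastive loss; this is a generic formulation, not the paper's exact loss or its domain adaptive denormalization step:

    import torch
    import torch.nn.functional as F

    def multimodal_info_nce(z_a, z_b, temperature=0.1):
        """Row i of z_a (modality A) and row i of z_b (modality B) come from the same
        patient and form the positive pair; all other rows in the batch are negatives."""
        z_a = F.normalize(z_a, dim=1)
        z_b = F.normalize(z_b, dim=1)
        logits = z_a @ z_b.t() / temperature              # (N, N) cosine similarities
        targets = torch.arange(z_a.size(0), device=z_a.device)
        # Symmetric loss over both matching directions.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    # Example with random embeddings from two modality-specific encoders.
    loss = multimodal_info_nce(torch.randn(8, 128), torch.randn(8, 128))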


Subject(s)
Humans , Algorithms , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Recognition, Psychology
3.
Journal of Biomedical Engineering ; (6): 392-400, 2023.
Article in Chinese | WPRIM | ID: wpr-981555

ABSTRACT

Medical image segmentation based on deep learning has become a powerful tool in medical image processing. Because of the special nature of medical images, deep learning-based segmentation algorithms face problems such as sample imbalance, blurred edges, false positives and false negatives. To address these problems, researchers mostly improve the network structure and rarely make improvements on the non-structural side. The loss function is an important part of deep learning-based segmentation methods. Improving the loss function can improve the segmentation performance of a network at its root, and because the loss function is independent of the network structure, it can be used with various network models and segmentation tasks in a plug-and-play manner. Starting from the difficulties of medical image segmentation, this paper first introduces loss functions and improvement strategies for the problems of sample imbalance, blurred edges, false positives and false negatives. The difficulties encountered in current loss function improvements are then analyzed, and future research directions are discussed. This paper provides a reference for the reasonable selection, improvement or design of loss functions and points out directions for follow-up research on loss functions.
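
As an example of the loss-function strategies such reviews cover, here is a plug-and-play sketch that combines a region-based Dice term (for sample imbalance) with a focal cross-entropy term (for hard pixels); the weighting and exact form are illustrative, not taken from the paper:

    import torch
    import torch.nn.functional as F

    def dice_focal_loss(logits, target, gamma=2.0, smooth=1.0, alpha=0.5):
        """Binary segmentation loss: soft Dice (handles foreground/background imbalance)
        plus focal binary cross-entropy (down-weights easy pixels). Illustrative sketch."""
        prob = torch.sigmoid(logits)
        # Soft Dice over the whole batch.
        inter = (prob * target).sum()
        dice = 1 - (2 * inter + smooth) / (prob.sum() + target.sum() + smooth)
        # Focal binary cross-entropy.
        bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
        p_t = prob * target + (1 - prob) * (1 - target)
        focal = ((1 - p_t) ** gamma * bce).mean()
        return alpha * dice + (1 - alpha) * focal

    loss = dice_focal_loss(torch.randn(2, 1, 64, 64),
                           torch.randint(0, 2, (2, 1, 64, 64)).float())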


Subject(s)
Algorithms , Image Processing, Computer-Assisted
4.
Journal of Biomedical Engineering ; (6): 208-216, 2023.
Article in Chinese | WPRIM | ID: wpr-981531

ABSTRACT

Aiming at the problems of missing important features, inconspicuous details and unclear textures in the fusion of multimodal medical images, this paper proposes a method for fusing computed tomography (CT) and magnetic resonance imaging (MRI) images that uses a generative adversarial network (GAN) and a convolutional neural network (CNN) together with image enhancement. The generator targeted the high-frequency feature images, and dual discriminators targeted the fused images after the inverse transform; the high-frequency feature images were then fused by the trained GAN model, while the low-frequency feature images were fused by a pre-trained CNN model based on transfer learning. Experimental results showed that, compared with current advanced fusion algorithms, the proposed method produced richer texture details and clearer contour edges in the subjective evaluation. In the objective evaluation, the Q^AB/F metric, information entropy (IE), spatial frequency (SF), structural similarity (SSIM), mutual information (MI) and visual information fidelity for fusion (VIFF) were 2.0%, 6.3%, 7.0%, 5.5%, 9.0% and 3.3% higher than the best comparison results, respectively. The fused images can be effectively applied to medical diagnosis and further improve diagnostic efficiency.
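
Two of the objective indicators cited, information entropy (IE) and spatial frequency (SF), follow standard definitions and can be computed as in this NumPy sketch (not the authors' evaluation code):

    import numpy as np

    def information_entropy(img, bins=256):
        """Shannon entropy of the grey-level histogram (higher = richer information)."""
        hist, _ = np.histogram(img, bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def spatial_frequency(img):
        """Spatial frequency: sqrt(row frequency squared + column frequency squared)."""
        img = img.astype(np.float64)
        rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
        cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
        return np.sqrt(rf ** 2 + cf ** 2)

    fused = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    print(information_entropy(fused), spatial_frequency(fused))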


Subject(s)
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Tomography, X-Ray Computed , Magnetic Resonance Imaging/methods , Algorithms
5.
Chinese Journal of Medical Instrumentation ; (6): 61-65, 2023.
Article in Chinese | WPRIM | ID: wpr-971304

ABSTRACT

In order to alleviate the conflict between medical supply and demand and to improve the efficiency of medical image transmission, this study proposes an intelligent method for transmitting large-volume medical images. The method extracts and generates keyword pairs by analyzing medical diagnostic reports and uses a 3D-UNet to segment the original image data into sub-areas according to anatomical structure. The sub-areas are then scored using the keyword pairs and preset scoring criteria, and transmitted to the user front end in order of priority score. Experiments show that the method can meet physicians' requirements for radiology reading and diagnosis while transmitting only ten percent of the data, effectively optimizing the traditional transmission procedure.
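
A simplified sketch of the prioritization step described above, with hypothetical data structures standing in for the paper's segmentation output and keyword extraction; it only illustrates scoring sub-areas and ordering them for transmission:

    from dataclasses import dataclass

    @dataclass
    class SubArea:
        name: str          # anatomical label produced by the segmentation network
        data: bytes        # voxel data of this sub-area (placeholder)

    def prioritize(sub_areas, keyword_pairs, base_scores):
        """Score each sub-area: preset base score plus a bonus for every
        (anatomy, finding) keyword pair from the report that mentions it."""
        def score(area):
            bonus = sum(1 for anatomy, _ in keyword_pairs if anatomy == area.name)
            return base_scores.get(area.name, 0) + bonus
        return sorted(sub_areas, key=score, reverse=True)

    # Example: the liver is mentioned in the report, so it is transmitted first.
    areas = [SubArea("lung", b"..."), SubArea("liver", b"..."), SubArea("spine", b"...")]
    order = prioritize(areas, keyword_pairs=[("liver", "lesion")],
                       base_scores={"lung": 2, "liver": 2, "spine": 1})
    for area in order:
        pass  # transmit area.data to the user front end in this order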

6.
Journal of Biomedical Engineering ; (6): 185-192, 2023.
Article in Chinese | WPRIM | ID: wpr-970690

ABSTRACT

Computer-aided diagnosis (CAD) systems play a very important role in modern medical diagnosis and treatment, but their performance is limited by the training samples. Training samples are affected by factors such as imaging cost, labeling cost and patient privacy, resulting in insufficient diversity of training images and difficulty in data acquisition. How to efficiently and cost-effectively augment existing medical image datasets has therefore become a research hotspot. This paper reviews the research progress on medical image dataset expansion methods based on the relevant domestic and international literature. First, expansion methods based on geometric transformations and on generative adversarial networks are compared and analyzed; then improvements of the augmentation methods based on generative adversarial networks are emphasized. Finally, some urgent problems in the field of medical image dataset expansion are discussed and future development trends are outlined.
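
The geometric-transformation branch of dataset expansion can be illustrated with a short, generic torchvision pipeline (not tied to any specific method in the review):

    import torchvision.transforms as T
    from PIL import Image

    # Typical geometric augmentations for medical images: small rotations,
    # flips and mild affine jitter that preserve anatomical plausibility.
    augment = T.Compose([
        T.RandomRotation(degrees=10),
        T.RandomHorizontalFlip(p=0.5),
        T.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.95, 1.05)),
        T.ToTensor(),
    ])

    image = Image.new("L", (256, 256))    # placeholder grey-scale image
    augmented = augment(image)            # tensor of shape (1, 256, 256)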


Subject(s)
Humans , Diagnosis, Computer-Assisted , Diagnostic Imaging , Datasets as Topic
7.
Digital Chinese Medicine ; (4): 406-418, 2022.
Article in English | WPRIM | ID: wpr-964350

ABSTRACT

Objective: To address the problem of insufficient segmentation in computer-aided Chinese medical diagnosis, a novel multi-level method based on a multi-scale fusion residual neural network (MF2ResU-Net) is proposed. Methods: To obtain refined features of retinal blood vessels, three cascaded U-Net networks are employed. To deal with the difference between the encoder and decoder parts, shortcut connections are used in MF2ResU-Net to combine the encoder and decoder layers within the blocks. To refine the segmentation features, atrous spatial pyramid pooling (ASPP) is embedded to provide multi-scale features for the final segmentation network. Results: MF2ResU-Net was superior to existing methods on the criteria of sensitivity (Sen), specificity (Spe), accuracy (ACC) and area under the curve (AUC), with values of 0.8013 and 0.8102, 0.9842 and 0.9809, 0.9700 and 0.9776, and 0.9797 and 0.9837 on DRIVE and CHASE DB1, respectively. The experimental results demonstrate the effectiveness and robustness of the model in segmenting vessels of complex curvature and small blood vessels. Conclusion: Based on residual connections and multi-feature fusion, the proposed method obtains accurate segmentation of retinal blood vessels by refining the segmentation features and can provide another diagnostic tool for computer-aided Chinese medical diagnosis.
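
A minimal PyTorch sketch of an ASPP block of the kind embedded in MF2ResU-Net; the dilation rates and channel sizes are illustrative assumptions, not the paper's configuration:

    import torch
    import torch.nn as nn

    class ASPP(nn.Module):
        """Atrous spatial pyramid pooling: parallel dilated convolutions capture
        multi-scale context, then a 1x1 convolution fuses the branches."""
        def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                    nn.BatchNorm2d(out_ch),
                    nn.ReLU(inplace=True))
                for r in rates
            ])
            self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

        def forward(self, x):
            return self.project(torch.cat([b(x) for b in self.branches], dim=1))

    y = ASPP(256, 64)(torch.randn(1, 256, 32, 32))   # -> (1, 64, 32, 32)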

8.
Journal of Biomedical Engineering ; (6): 1218-1232, 2022.
Article in Chinese | WPRIM | ID: wpr-970661

ABSTRACT

In recent years, object detection and segmentation in medical images has been a research hotspot and a difficult problem in image processing. Instance segmentation provides instance-level labels for different objects of the same class and is therefore widely used in medical image processing. This paper summarizes medical image instance segmentation from the following aspects. First, the basic principle of instance segmentation was described, instance segmentation models were classified into three categories, the development of instance segmentation algorithms was displayed in a two-dimensional space, and diagrams of six classic instance segmentation models were given. Second, from the perspective of two-stage, single-stage and three-dimensional (3D) instance segmentation, the ideas behind the three types of models were summarized, their advantages and disadvantages were discussed, and the latest developments were reviewed. Third, the application of instance segmentation to six types of medical images was summarized: colon tissue images, cervical images, bone images, pathological sections of gastric cancer, computed tomography (CT) images of lung nodules and X-ray images of the breast. Fourth, the main challenges in medical image instance segmentation were discussed and future directions were outlined. By systematically summarizing the principles, models and characteristics of instance segmentation, as well as its applications in medical image processing, this paper offers positive guidance for the study of instance segmentation.


Subject(s)
Imaging, Three-Dimensional/methods , Image Processing, Computer-Assisted , Tomography, X-Ray Computed/methods , Algorithms
9.
Journal of Biomedical Engineering ; (6): 1181-1188, 2022.
Article in Chinese | WPRIM | ID: wpr-970657

ABSTRACT

Intelligent medical image segmentation methods have developed rapidly and been widely applied, but a significant challenge is domain shift: segmentation performance degrades because of distribution differences between the source domain and the target domain. This paper proposed an unsupervised end-to-end domain-adaptive medical image segmentation method based on a generative adversarial network (GAN). A network training and adjustment model was designed that includes a segmentation network and a discriminant network. In the segmentation network, the residual module was used as the basic building block to increase feature reusability and reduce the difficulty of model optimization; with the help of the discriminant network and a combination of segmentation loss and adversarial loss, it learned cross-domain features at the image feature level. The discriminant network was a convolutional neural network that used labels from the source domain to distinguish whether the segmentation result produced by the generator came from the source domain or the target domain. The whole training process was unsupervised. The proposed method was tested on a public dataset of knee magnetic resonance (MR) images and a clinical dataset from our cooperating hospital. With our method, the mean Dice similarity coefficient (DSC) of the segmentation results increased by 2.52% and 6.10% compared with the classical feature-level and image-level domain adaptation methods, respectively. The proposed method effectively improves the domain-adaptive ability of the segmentation model, significantly improves the segmentation accuracy of the tibia and femur, and can better solve the domain shift problem in MR image segmentation.
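
The reported Dice similarity coefficient and the combination of segmentation loss with adversarial loss can be sketched generically as follows (not the paper's code; the weighting factor is an assumption):

    import torch
    import torch.nn.functional as F

    def dice_coefficient(pred, target, eps=1e-6):
        """Dice similarity coefficient between a binary prediction and ground truth."""
        inter = (pred * target).sum()
        return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

    def generator_loss(seg_logits, labels, disc_logits_on_target, lam=0.01):
        """Segmentation loss on source-domain labels plus an adversarial term that
        pushes target-domain outputs to fool the discriminator ('source' label = 1)."""
        seg = F.cross_entropy(seg_logits, labels)
        adv = F.binary_cross_entropy_with_logits(
            disc_logits_on_target, torch.ones_like(disc_logits_on_target))
        return seg + lam * adv

    # Example: DSC of a random binary prediction against a random mask.
    pred = (torch.rand(1, 1, 64, 64) > 0.5).float()
    gt = (torch.rand(1, 1, 64, 64) > 0.5).float()
    print(dice_coefficient(pred, gt).item())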


Subject(s)
Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Magnetic Resonance Imaging , Knee , Knee Joint
10.
Chinese Journal of Radiation Oncology ; (6): 936-941, 2021.
Article in Chinese | WPRIM | ID: wpr-910495

ABSTRACT

Objective: To propose an image similarity measure based on structural information and intuitionistic fuzzy sets and to measure the similarity between the planning CT image and the CBCT image in radiotherapy positioning, so as to quantify setup errors objectively. Methods: Four pre-registration image pairs were randomly selected: the cross-sectional and sagittal planes of a nasopharyngeal carcinoma patient and the cross-sectional and coronal planes of a pelvic tumor patient. Five methods were used to quantify the setup errors: correlation coefficient, mean square error, image joint entropy, mutual information and the proposed similarity measure. Results: All five methods could describe the deviation to a certain extent. Compared with the other methods, the proposed similarity measure showed a stronger upward trend as the errors increased. After normalization, the results for the five levels of increasing error on the cross-sectional plane of the nasopharyngeal carcinoma patient were 0.553, 0.683, 1.055, 1.995 and 5.151, and 1.171, 1.618, 1.962, 1.790 and 3.572 on the sagittal plane, respectively. The normalized results of the other methods fell between 0 and 2, and their values for different error levels changed only slightly. In addition, the proposed method was more sensitive to soft tissue errors. Conclusions: The image similarity measure based on structural information and intuitionistic fuzzy sets agrees better with human visual perception than the existing evaluation methods. Errors in bone markers and soft tissues can be objectively quantified to a certain extent, and the soft tissue deviation reflected by the setup errors is meaningful for individualized precision radiotherapy.
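
Mutual information, one of the five compared measures, can be estimated from a joint grey-level histogram; a NumPy sketch under the standard definition:

    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """MI(A;B) = sum p(a,b) * log( p(a,b) / (p(a) p(b)) ), from a 2D histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()
        p_a = p_ab.sum(axis=1, keepdims=True)
        p_b = p_ab.sum(axis=0, keepdims=True)
        mask = p_ab > 0
        return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])))

    # Example: MI between a CT slice and a simulated CBCT slice with added noise.
    ct = np.random.rand(128, 128)
    cbct = ct + 0.05 * np.random.randn(128, 128)
    print(mutual_information(ct, cbct))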

11.
Chinese Journal of Tissue Engineering Research ; (53): 632-637, 2021.
Article in Chinese | WPRIM | ID: wpr-847168

ABSTRACT

BACKGROUND: The application of three-dimensional (3D) precision printing in orthopedic rehabilitation medicine is attracting increasing attention from clinicians, engineers and researchers. OBJECTIVE: To review the development of 3D printing in orthopedic rehabilitation. METHODS: Relevant documents published from 2011 to 2019 were retrieved from the CNKI, Wanfang, PubMed and Elsevier databases. The search terms were "3D precision printing, orthopedic rehabilitation medicine, artificial intelligence" in English and Chinese. RESULTS AND CONCLUSION: At present, 3D precision printing is the key technology in orthopedic rehabilitation medicine, covering medical image processing and 3D modeling, surgical simulation and planning systems, surgical guide design, implant design and 3D precision printing equipment. Among these, 3D reconstruction techniques such as multiplanar reconstruction, volume rendering, maximum intensity projection, minimum intensity projection and surface shaded display are used to read digital medical data and realize visual medical image processing and 3D modeling, which can improve the efficiency of doctor-patient communication. The thinking process and intelligent behavior of doctors in rehabilitation surgery can be simulated by computer, and a cloud service center with functions similar to a doctor's clinical intelligence can be created to assist in the planning of rehabilitation surgery. With the aid of personalized surgical guide design software, a bone graft osteotomy guide can be used to plan the osteotomy line and osteotomy range for the surgeon, which shortens operation time and improves operative safety. When polymer, metal, ceramic and other materials are used as 3D printing materials, problems such as poor mechanical and physiological adaptation remain. The degree of automation of common biological 3D design software is relatively low, which tends to cause problems such as unsatisfactory matching between the performance of orthopedic medical devices and the defect site, and single internal pore structures of implants. Nevertheless, the application of 3D precision printing in immersive rehabilitation medicine teaching systems benefits the training of rehabilitation medicine talent in combination with interdisciplinary medical workers.
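
Of the 3D reconstruction techniques listed, maximum and minimum intensity projection are the simplest to illustrate; a NumPy sketch over a synthetic CT volume:

    import numpy as np

    def intensity_projection(volume, axis=0, mode="max"):
        """Maximum or minimum intensity projection of a 3D volume along one axis."""
        return volume.max(axis=axis) if mode == "max" else volume.min(axis=axis)

    # Example: project a synthetic CT volume (slices, rows, cols) along the slice axis.
    volume = np.random.randint(-1000, 1000, size=(120, 512, 512), dtype=np.int16)
    mip = intensity_projection(volume, axis=0, mode="max")    # 2D image, shape (512, 512)
    minip = intensity_projection(volume, axis=0, mode="min")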

12.
Chinese Journal of Medical Instrumentation ; (6): 366-371, 2021.
Article in Chinese | WPRIM | ID: wpr-888625

ABSTRACT

Nowadays, massive medical data have already influenced information system construction in medical institutions, and relying solely on traditional local storage systems is no longer enough to address the problems of read/write speed, visualization and cost brought about by such massive data. Furthermore, various medical cloud services have been developed at home and abroad so that patients' medical data can be shared across all medical institutions in the cloud, which places higher demands on the transmission speed of medical data. This article performs medical image transmission speed tests on three mainstream storage technologies and analyzes them from multiple aspects, such as high availability and cost, in order to identify a suitable storage system for accessing medical image big data in the future. The experimental results show that, in the process of accessing medical image big data, the access speed and performance of the object storage system are better than those of existing local storage systems. With comprehensive consideration, however, a distributed file storage system such as HDFS is recommended as the first choice for storing medical images.


Subject(s)
Humans , Cloud Computing , Computer Communication Networks , Information Storage and Retrieval , Technology
13.
Chinese Journal of Medical Instrumentation ; (6): 420-424, 2020.
Article in Chinese | WPRIM | ID: wpr-942753

ABSTRACT

The development of medical image segmentation technology is briefly reviewed. The applications of automatic segmentation of organs at risk and of target volumes based on Atlas methods and on deep learning in radiotherapy are introduced in detail. The development direction and product models of general automatic contouring tools or systems based on solid clinical data are then discussed.


Subject(s)
Image Processing, Computer-Assisted , Radiotherapy/trends , Radiotherapy Planning, Computer-Assisted , Technology , Tomography, X-Ray Computed
14.
Braz. arch. biol. technol ; 63: e20180473, 2020. tab, graf
Article in English | LILACS | ID: biblio-1132223

ABSTRACT

The evolution of digital health-care information systems has established medical image security as a contemporary research area. Most researchers have used either image watermarking or image encryption to address medical image security, and very few proposals have addressed both. This paper implements a fast medical image security algorithm for color images that applies both watermarking and encryption to each color channel. The proposed method starts by embedding a smoothened key image (K) and patient information into the original image (I) to generate a watermarked image (W). Each color channel of the watermarked image (W) is then encrypted separately, using the same smoothened key image (K), to produce an encrypted image (E). This image can be transmitted over a public network, and the original image (I) can be recovered at the receiver by the decryption algorithm followed by de-watermarking with the same key image (K). Qualitative and quantitative results show that the proposed method performs well compared with the existing method, with high mean, PSNR and entropy values.
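
The paper's cipher is not specified in this abstract, so the sketch below only illustrates the overall shape of the scheme, encrypting each colour channel with the same key image, with a simple XOR stream standing in for the actual algorithm:

    import numpy as np

    def xor_encrypt_channels(image, key_image):
        """Encrypt each colour channel of `image` (H, W, 3) with the same key image (H, W).
        XOR stands in for the paper's cipher; the same call also decrypts."""
        key = key_image.astype(np.uint8)
        return np.stack([image[..., c] ^ key for c in range(image.shape[-1])], axis=-1)

    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
    key = rng.integers(0, 256, (256, 256), dtype=np.uint8)

    encrypted = xor_encrypt_channels(original, key)
    recovered = xor_encrypt_channels(encrypted, key)   # XOR is its own inverse
    assert np.array_equal(recovered, original)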


Subject(s)
Humans , Medical Informatics/standards , Diagnostic Imaging/standards , Computer Security , Algorithms
15.
Chinese Journal of Orthopaedic Trauma ; (12): 67-71, 2020.
Article in Chinese | WPRIM | ID: wpr-867823

ABSTRACT

Objective: To evaluate the advantages of preoperative three-dimensional reconstruction with Mimics software in the treatment of talar posterior process fractures through the posteromedial malleolar approach. Methods: From May 2015 to February 2019, 7 patients with talar posterior process fracture were treated at the Department of Orthopaedic Trauma, Jishuitan Hospital. They were 5 men and 2 women, aged from 20 to 70 years (mean, 39 years). All underwent routine CT examination preoperatively. The posterior process of the talus was reconstructed with Mimics software from the preoperative CT data to determine the size, number and displacement of the fracture fragments. The fractures were treated by open reduction and screw fixation in the prone position through the posterior ankle approach. The American Orthopaedic Foot & Ankle Society (AOFAS) ankle-hindfoot scoring system was used to evaluate functional recovery. Results: Operation time ranged from 70 to 105 min, averaging 87.1 min. Early after operation the wounds healed well, with no injury to nerves or tendons. All patients were followed up for 4 to 24 months (average, 12 months). X-ray examination after 10 to 16 weeks revealed fracture union, with no complications such as screw breakage, nonunion, malunion or traumatic arthritis. AOFAS ankle-hindfoot scores at the final follow-up ranged from 80 to 98 points. Conclusion: Preoperative three-dimensional reconstruction of talar posterior process fractures from CT images using Mimics software can accurately determine the entry point and direction of screw insertion, offering clear exposure, easy reduction and convenient screw placement in the treatment of these fractures through the posteromedial malleolar approach.

16.
Journal of Biomedical Engineering ; (6): 557-565, 2020.
Article in Chinese | WPRIM | ID: wpr-828134

ABSTRACT

Coronavirus disease 2019 (COVID-19) has spread rapidly around the world. In order to diagnose COVID-19 more quickly, this paper proposes a depthwise separable DenseNet. A deep learning model was constructed with 2,905 chest X-ray images as the experimental dataset. To enhance contrast, the contrast limited adaptive histogram equalization (CLAHE) algorithm was used to preprocess the X-ray images before network training; the images were then fed into the network and its parameters were tuned to the optimum. Leaky ReLU was selected as the activation function. VGG16, ResNet18, ResNet34, DenseNet121 and SDenseNet models were compared with the model proposed in this paper. Compared with ResNet34, the proposed pneumonia classification model improved accuracy, sensitivity and specificity by 2.0%, 2.3% and 1.5%, respectively. Compared with the SDenseNet network without depthwise separable convolution, the number of parameters of the proposed model was reduced by 43.9% while the classification performance did not decrease. The proposed DWSDenseNet thus achieves good classification performance on the COVID-19 chest X-ray dataset, and depthwise separable convolution can effectively reduce the number of model parameters while preserving accuracy as much as possible.
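
Depthwise separable convolution and CLAHE preprocessing can both be sketched briefly; the block below is an illustration in PyTorch and OpenCV, not the authors' DWSDenseNet:

    import torch
    import torch.nn as nn
    import cv2
    import numpy as np

    class DepthwiseSeparableConv(nn.Module):
        """Depthwise 3x3 convolution per channel followed by a 1x1 pointwise convolution,
        which sharply reduces parameters compared with a standard convolution."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = nn.LeakyReLU(0.01, inplace=True)

        def forward(self, x):
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    # CLAHE preprocessing of a grey-scale chest X-ray before it enters the network.
    xray = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(xray)
    features = DepthwiseSeparableConv(1, 32)(
        torch.from_numpy(enhanced)[None, None].float() / 255)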


Subject(s)
Humans , Betacoronavirus , Coronavirus Infections , Diagnostic Imaging , Deep Learning , Pandemics , Pneumonia, Viral , Diagnostic Imaging , X-Rays
17.
Journal of Biomedical Engineering ; (6): 630-640, 2020.
Article in Chinese | WPRIM | ID: wpr-828124

ABSTRACT

In order to overcome the difficulty of lung parenchyma segmentation caused by factors such as lung disease and bronchial interference, a three-dimensional lung parenchyma segmentation algorithm is presented that integrates the surfacelet transform with a pulse coupled neural network (PCNN). First, the three-dimensional lung computed tomography volume is decomposed in the surfacelet transform domain to obtain multi-scale, multi-directional sub-band information, and the edge features are enhanced by filtering the sub-band coefficients with a local modified Laplacian operator. Second, the inverse surfacelet transform is applied and the reconstructed image is fed to the input of the PCNN. Finally, the PCNN is iterated to obtain the final segmentation result. The proposed algorithm is validated on samples from a public dataset. The experimental results demonstrate that it outperforms the three-dimensional surfacelet-transform edge detection algorithm, the three-dimensional region growing algorithm and the three-dimensional U-Net algorithm. It can effectively suppress interference from lung lesions and bronchi and obtain a complete lung parenchyma structure.
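
The PCNN stage can be illustrated with a simplified two-dimensional pulse coupled neural network iteration; this is the standard simplified model, not the paper's surfacelet-domain implementation, and the parameters are arbitrary:

    import numpy as np
    from scipy.ndimage import convolve

    def pcnn_segment(image, iterations=30, beta=0.2, alpha_theta=0.2, v_theta=20.0):
        """Simplified PCNN: each neuron combines its stimulus with pulses of its
        neighbours and fires when the internal activity exceeds a decaying threshold."""
        s = image.astype(np.float64)
        s = (s - s.min()) / (s.max() - s.min() + 1e-9)   # stimulus normalised to [0, 1]
        kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
        y = np.zeros_like(s)                   # pulse output of the previous iteration
        theta = np.ones_like(s)                # dynamic threshold
        fired = np.zeros(s.shape, dtype=bool)
        for _ in range(iterations):
            link = convolve(y, kernel, mode="constant")  # linking input from neighbours
            u = s * (1.0 + beta * link)                  # internal activity
            y = (u > theta).astype(np.float64)
            theta = np.exp(-alpha_theta) * theta + v_theta * y
            fired |= y.astype(bool)
        return fired                           # pixels that pulsed at least once

    mask = pcnn_segment(np.random.rand(128, 128))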


Subject(s)
Algorithms , Neural Networks, Computer , Tomography, X-Ray Computed
18.
Frontiers of Medicine ; (4): 450-469, 2020.
Article in English | WPRIM | ID: wpr-827866

ABSTRACT

As a promising method in artificial intelligence, deep learning has proven successful in domains ranging from acoustics and images to natural language processing. With medical imaging becoming an important part of disease screening and diagnosis, deep learning-based approaches have emerged as powerful techniques in medical imaging. In this process, feature representations are learned directly and automatically from data, leading to remarkable breakthroughs in the medical field. Deep learning has been widely applied in medical imaging for improved image analysis. This paper reviews the major deep learning techniques in this time of rapid evolution and summarizes some of their key contributions and state-of-the-art outcomes. The topics include classification, detection and segmentation tasks in medical image analysis with respect to pulmonary medical images, datasets and benchmarks. A comprehensive overview of these methods applied to various lung diseases, including pulmonary nodules, pulmonary embolism, pneumonia and interstitial lung disease, is also provided. Lastly, the application of deep learning techniques to medical images and an analysis of their future challenges and potential directions are discussed.

19.
Article | IMSEAR | ID: sea-209972

ABSTRACT

Recently, breast cancer has become one of the most common cancers affecting women. Its seriousness is evidenced by the fact that its mortality rate is the second highest after lung cancer. For the detection of breast cancer, mammography has emerged as the most effective modality, despite the challenges posed by dense breast parenchyma. In this regard, computer-aided diagnosis (CADe) leverages the output of mammography systems to support the radiologist's decision. It can be defined as a system that makes a diagnosis similar to that of a radiologist who bases his or her interpretation on suggestions generated by a computer after it has analyzed a set of the patient's radiological images. Against this backdrop, the current paper examines different ways of applying known image processing and machine learning techniques to the detection of breast cancer using CAD, more specifically using mammogram images, which in turn helps pathologists in their decision-making. For effective evaluation of this methodology, a CADe system was developed and tested on the public, freely available MIAS mammographic database. The CADe system is designed to differentiate between normal and abnormal tissues and assists radiologists in avoiding missed breast abnormalities. The performance of all classifiers is best when the sequential forward selection (SFS) method is used. We can also conclude that the grey-level quantization of the gray-level co-occurrence matrix (GLCM) is a significant factor in obtaining robust high-order features, with better results when L equals the size of the ROI. Using a large number of features helps the CADe system become robust enough to distinguish between the different tissues.
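
The GLCM texture features mentioned can be extracted with scikit-image (graycomatrix/graycoprops in recent releases); a sketch on a synthetic ROI, with the grey-level count L as a tunable assumption:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(roi, levels=256, distances=(1,), angles=(0, np.pi / 2)):
        """Grey-level co-occurrence matrix features of a region of interest."""
        glcm = graycomatrix(roi, distances=list(distances), angles=list(angles),
                            levels=levels, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}

    # Example on a synthetic 64x64 ROI with 256 grey levels.
    roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    print(glcm_features(roi))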

20.
Chinese Journal of Medical Imaging Technology ; (12): 1813-1816, 2019.
Article in Chinese | WPRIM | ID: wpr-861138

ABSTRACT

In recent years, with the continuous development of artificial intelligence, deep learning (DL) technology has kept improving and has become a research hotspot in the medical field. Research on DL in medical imaging has opened a new direction for the precise diagnosis, individualized treatment and prognosis evaluation of brain neoplasms. This paper reviews the current status and future development of DL for brain neoplasm medical imaging.
